    The Dispersion of the Gauss-Markov Source

    The Gauss-Markov source produces U_i = aU_(i-1) + Z_i for i ≥ 1, where U_0 = 0, |a| < 1, and the Z_i's are i.i.d. Gaussian random variables with zero mean and variance σ^2. We obtain the Gaussian approximation to the finite blocklength rate-distortion function under the excess-distortion criterion for any distortion d > 0, and we show that the dispersion has a reverse waterfilling representation. This is the first finite blocklength result for lossy compression of sources with memory. We prove that the finite blocklength rate-distortion function R(n, d, ε) approaches the rate-distortion function R(d) as R(n, d, ε) = R(d) + √(V(d)/n) Q^(-1)(ε) + o(1/√n), where V(d) is the dispersion, ε ∈ (0, 1) is the excess-distortion probability, and Q^(-1) is the inverse of the Q-function. We give a reverse waterfilling integral representation for the dispersion V(d), which parallels that of the rate-distortion functions for Gaussian processes. Remarkably, for all 0 < d ≤ σ^2/(1+|a|)^2, R(n, d, ε) of the Gauss-Markov source coincides with that of Z_i, the i.i.d. Gaussian noise driving the process, up to the second-order term. Among the novel technical tools developed in this paper are a sharp approximation of the eigenvalues of the covariance matrix of n samples of the Gauss-Markov source, and a construction of a typical set using the maximum likelihood estimate of the parameter a based on n observations.
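
    As a quick illustration of the reverse waterfilling computation mentioned above, the sketch below evaluates the classical parametric formulas d(θ) = (1/2π)∫min(θ, S(ω))dω and R(θ) = (1/2π)∫max(0, ½ log(S(ω)/θ))dω with the AR(1) power spectral density S(ω) = σ^2/(1 - 2a cos ω + a^2), then plugs R(d) into the second-order expansion from the abstract. The dispersion value V = 1/2 nats^2 is the known i.i.d. Gaussian dispersion, used here only as a stand-in for the low-distortion regime d ≤ σ^2/(1+|a|)^2 where the abstract says the two coincide; this is our own sketch, not code from the paper.

```python
import numpy as np
from scipy.stats import norm

def rd_reverse_waterfilling(a=0.5, sigma2=1.0, theta=0.1, n_grid=10**5):
    """Classical reverse waterfilling for the stationary AR(1) source with
    power spectral density S(w) = sigma2 / (1 - 2a cos w + a^2).
    For water level theta, returns distortion d(theta) and rate R(theta) in nats."""
    w = np.linspace(-np.pi, np.pi, n_grid)
    S = sigma2 / (1.0 - 2.0 * a * np.cos(w) + a**2)
    d = np.trapz(np.minimum(theta, S), w) / (2.0 * np.pi)
    R = np.trapz(np.maximum(0.0, 0.5 * np.log(S / theta)), w) / (2.0 * np.pi)
    return d, R

def second_order_rate(Rd, n, eps, V=0.5):
    """Two-term Gaussian approximation R(n,d,eps) ~ R(d) + sqrt(V/n) Q^{-1}(eps).
    V = 1/2 nats^2 is the i.i.d. Gaussian dispersion, a stand-in for the
    low-distortion regime d <= sigma^2/(1+|a|)^2."""
    return Rd + np.sqrt(V / n) * norm.isf(eps)  # norm.isf is Q^{-1}

d, Rd = rd_reverse_waterfilling(a=0.5, sigma2=1.0, theta=0.1)
print(f"d = {d:.4f}, R(d) = {Rd:.4f} nats")
print(f"R(n=1000, d, eps=0.01) ~ {second_order_rate(Rd, 1000, 0.01):.4f} nats")
```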

    From Parameter Estimation to Dispersion of Nonstationary Gauss-Markov Processes

    This paper provides a precise error analysis for the maximum likelihood estimate â(u) of the parameter a given samples u = (u_1, …, u_n)^⊤ drawn from a nonstationary Gauss-Markov process U_i = aU_(i-1) + Z_i, i ≥ 1, where a > 1, U_0 = 0, and the Z_i's are independent Gaussian random variables with zero mean and variance σ^2. We show a tight nonasymptotic exponentially decaying bound on the tail probability of the estimation error. Unlike previous works, our bound is tight already for a sample size of the order of hundreds. We apply the new estimation bound to find the dispersion for lossy compression of nonstationary Gauss-Markov sources. We show that the dispersion is given by the same integral formula derived in our previous work [1] for the (asymptotically) stationary Gauss-Markov sources, i.e., |a| < 1. New ideas in the nonstationary case include a deeper understanding of the scaling of the maximum eigenvalue of the covariance matrix of the source sequence, and new techniques in the derivation of our estimation error bound.
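
    For this AR(1) model with U_0 = 0, the maximum likelihood estimate has the standard closed form â(u) = Σ u_(i-1) u_i / Σ u_(i-1)^2. The small simulation below (parameters our own) checks empirically that the estimation error is already tiny at sample sizes of the order of hundreds in the nonstationary regime a > 1, consistent with the tail bound described above.

```python
import numpy as np

def simulate_gauss_markov(a, sigma, n, rng):
    """n samples of U_i = a*U_{i-1} + Z_i with U_0 = 0, Z_i ~ N(0, sigma^2) i.i.d."""
    z = rng.normal(0.0, sigma, size=n)
    u = np.empty(n)
    prev = 0.0
    for i in range(n):
        prev = a * prev + z[i]
        u[i] = prev
    return u

def ml_estimate(u):
    """ML estimate of a given U_0 = 0: a_hat = sum u_{i-1} u_i / sum u_{i-1}^2."""
    return np.dot(u[:-1], u[1:]) / np.dot(u[:-1], u[:-1])

rng = np.random.default_rng(0)
a, sigma, n, trials = 1.1, 1.0, 200, 1000
errs = [abs(ml_estimate(simulate_gauss_markov(a, sigma, n, rng)) - a)
        for _ in range(trials)]
print(f"n = {n}: median |a_hat - a| = {np.median(errs):.2e}, "
      f"max = {np.max(errs):.2e}")
```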

    Arbitrarily varying networks: capacity-achieving computationally efficient codes

    We consider the problem of communication over a network containing a hidden and malicious adversary that can control a subset of network resources and aims to disrupt communications. We focus on omniscient node-based adversaries, i.e., adversaries that control a subset of nodes and know the message, network code, and packets on all links. Characterizing information-theoretically optimal communication rates as a function of network parameters and bounds on the adversarially controlled network is in general open, even for unicast (single source, single destination) problems. In this work we characterize the information-theoretically optimal randomized capacity of such problems, i.e., under the assumption that the source node shares (an asymptotically negligible amount of) independent common randomness with each network node a priori (for instance, as part of network design). We propose a novel computationally efficient communication scheme whose rate matches a natural information-theoretic "erasure outer bound" on the optimal rate. Our schemes require no prior knowledge of network topology, and can be implemented in a distributed manner as an overlay on top of classical distributed linear network coding.
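
    The scheme above is described as an overlay on classical distributed linear network coding. As background only (a toy sketch of that underlying primitive, not the paper's construction), the snippet below mixes packets with random GF(2) coefficients carried in headers and decodes at the destination by Gaussian elimination once the collected headers have full rank.

```python
import numpy as np

def rlnc_mix(packets, n_out, rng):
    """Random linear network coding over GF(2): each output packet is a random
    XOR combination of the inputs; the coefficient header travels with it."""
    k = len(packets)
    coeffs = rng.integers(0, 2, size=(n_out, k), dtype=np.uint8)
    coded = (coeffs @ np.asarray(packets, dtype=np.uint8)) % 2
    return coeffs, coded

def rlnc_decode(coeffs, coded):
    """Gaussian elimination over GF(2); returns the source packets if the
    collected coefficient matrix has full rank, else None."""
    A = np.concatenate([coeffs, coded], axis=1).astype(np.uint8)
    m, k = coeffs.shape
    row = 0
    for col in range(k):
        piv = next((r for r in range(row, m) if A[r, col]), None)
        if piv is None:
            return None                    # rank-deficient: need more packets
        A[[row, piv]] = A[[piv, row]]      # move pivot row into place
        for r in range(m):
            if r != row and A[r, col]:
                A[r] ^= A[row]             # eliminate this column elsewhere
        row += 1
    return A[:k, k:]                       # identity on the left, data on the right

rng = np.random.default_rng(1)
packets = rng.integers(0, 2, size=(4, 16), dtype=np.uint8)  # 4 packets, 16 bits
coeffs, coded = rlnc_mix(packets, n_out=6, rng=rng)         # some redundancy
decoded = rlnc_decode(coeffs, coded)
if decoded is None:
    print("headers not full rank; collect more coded packets")
else:
    print("decoded correctly:", bool((decoded == packets).all()))
```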

    The Minimal Directed Information Needed to Improve the LQG Cost

    We study a linear quadratic Gaussian (LQG) control problem in which a noisy observation of the system state is available to the controller. To lower the achievable LQG cost, we introduce an extra communication link from the system to the controller. We investigate the trade-off between the improved LQG cost and the consumed communication (information) resources, measured by the conditional directed information. The objective is to minimize the directed information over all encoding-decoding policies subject to a constraint on the LQG cost. The main result is a semidefinite programming formulation of this optimization problem in the finite-horizon scenario, where the dynamical system may have time-varying parameters. This result extends the seminal work by Tanaka et al., in which the direct noisy measurement of the system state at the controller is assumed to be absent. As part of our derivation showing the optimality of an encoder that transmits a Gaussian measurement of the state, we show that the presence of the noisy measurements at the encoder cannot reduce the minimal directed information, extending a prior result of Kostina and Hassibi to the vector case. Finally, we show that the results in the finite-horizon case extend to the infinite-horizon scenario when assuming a time-invariant system, but a possibly time-varying policy. We show that the solution to this optimization problem can be realized by a time-invariant policy whose parameters can be computed explicitly from a finite-dimensional semidefinite program.
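
    A toy scalar version of the trade-off studied above (our own simplification; the paper's actual result is the SDP formulation for the vector case): with an extra Gaussian link of noise variance sigma_e^2, the steady-state Kalman recursion gives the achievable posterior variance, the per-step information carried by the extra link is ½ log(P_y / P_yz) nats (the Gaussian conditional mutual information between the state and the extra measurement given the past), and the cost is evaluated under the certainty-equivalent controller u = -a * x_hat. Sweeping sigma_e^2 traces a cost-information frontier.

```python
import numpy as np

def frontier_point(a=1.2, q=1.0, r=1.0, s=4.0, iters=500):
    """Toy scalar loop: x_{k+1} = a x_k + u_k + w_k, w ~ N(0, q).
    Controller sees y = x + v (var r) plus an extra link z = x + e (var s).
    Returns (per-step info of the extra link in nats, steady-state E[x^2]
    under the certainty-equivalent controller u = -a * x_hat)."""
    P = q  # posterior state-error variance, iterated to steady state
    for _ in range(iters):
        P_prior = a * a * P + q                        # predict through dynamics
        P_y = 1.0 / (1.0 / P_prior + 1.0 / r)          # after built-in sensor
        P = 1.0 / (1.0 / P_prior + 1.0 / r + 1.0 / s)  # after extra link too
    info = 0.5 * np.log(P_y / P)   # 1/2 log of the variance reduction, in nats
    cost = a * a * P + q           # E[x^2] with u = -a * x_hat: x' = a*err + w
    return info, cost

for s in [0.1, 1.0, 10.0, np.inf]:  # sweep the extra-link noise variance
    info, cost = frontier_point(s=s)
    print(f"sigma_e^2 = {s:6}: info = {info:.3f} nats/step, E[x^2] = {cost:.3f}")
```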